An amended MaxEnt formulation for deriving Tsallis factors, and associated issues
Abstract
An amended MaxEnt formulation for systems displaced from the conventional MaxEnt equilibrium is proposed. This formulation involves the minimization of the Kullback-Leibler divergence to a reference Q (or maximization of Shannon Q-entropy), subject to a constraint that involves a second reference distribution P1 and tunes the new equilibrium. In this setting, the equilibrium distribution is the generalized escort distribution associated with P1 and Q. Taking into account an additional constraint — an observable given by a statistical mean — leads to the maximization of Rényi/Tsallis Q-entropy subject to that constraint. Two natural scenarios for this observation constraint are considered, and the classical and generalized constraints of nonextensive statistics are recovered. The solutions to the maximization of Rényi Q-entropy subject to the two types of constraints are derived. These optimum distributions, which are Lévy-like distributions, are self-referential. We then propose two 'alternate' (but effectively computable) dual functions, whose maximization enables identification of the optimum parameters. Finally, a duality between solutions and the underlying Legendre structure are presented.
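The two central objects of the abstract — the Tsallis q-entropy and the generalized escort distribution associated with two references P1 and Q — can be sketched numerically. This is an illustrative sketch only: the function names `tsallis_entropy` and `generalized_escort` are not from the paper, and the escort form p*_i ∝ P1_i^q · Q_i^(1−q) is the standard generalized escort construction assumed from context.

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis q-entropy S_q(p) = (1 - sum_i p_i^q) / (q - 1).

    Recovers the Shannon entropy in the limit q -> 1.
    """
    return (1.0 - np.sum(p**q)) / (q - 1.0)

def generalized_escort(p1, q_ref, q):
    """Generalized escort distribution of references P1 and Q:
    p*_i proportional to P1_i^q * Q_i^(1-q), normalized to sum to 1.
    """
    w = p1**q * q_ref**(1.0 - q)
    return w / w.sum()

# Example: for a uniform distribution over 4 states, S_2 = 0.75,
# and the escort of two identical references is the reference itself.
p = np.ones(4) / 4
print(tsallis_entropy(p, 2.0))        # 0.75
print(generalized_escort(p, p, 0.5))  # uniform again
```

When P1 = Q the escort reduces to the reference itself; for q between 0 and 1 it interpolates geometrically between the two references.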
Similar resources
An Efficient Algorithm for Maximum Tsallis Entropy using Fenchel-duality
We derive a dual-primal recursive algorithm based on the Fenchel duality framework, extending Dykstra's successive projections and Csiszar's I-projection schemes, to handle Tsallis MaxEnt. The Tsallis entropy Sq(p) is a one-parameter extension of Shannon's entropy H(p) in the sense that Sq→1(p) = H(p). The solution of the Tsallis MaxEnt falls under a q-deformed Gibbs distribution which is a pow...
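As background for the projection schemes this entry builds on, a minimal classical I-projection onto a single mean constraint can be sketched. This is not the paper's dual-primal Fenchel algorithm: it is the textbook fact that the minimizer of KL(p ‖ prior) subject to a mean constraint is an exponentially tilted prior, with the tilt parameter found here by bisection. The function name `i_projection_mean` is illustrative.

```python
import numpy as np

def i_projection_mean(prior, x, target, lo=-50.0, hi=50.0, iters=100):
    """I-projection of `prior` onto {p : sum_i p_i * x_i = target}.

    The minimizer of KL(p || prior) under a mean constraint has the
    tilted form p_i ∝ prior_i * exp(lam * x_i); since the tilted mean
    is monotone increasing in lam, bisection locates the right tilt.
    `target` must lie strictly between min(x) and max(x).
    """
    def tilted(lam):
        w = prior * np.exp(lam * x)
        p = w / w.sum()
        return p, p @ x

    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        p, m = tilted(lam)
        if m < target:
            lo = lam
        else:
            hi = lam
    return p

# Example: project a uniform prior on {0, 1, 2, 3} onto mean = 2.
prior = np.ones(4) / 4
x = np.array([0.0, 1.0, 2.0, 3.0])
p = i_projection_mean(prior, x, 2.0)
print(p @ x)  # approximately 2.0
```

Dykstra- and Csiszar-style schemes handle several such constraints by cycling projections of this kind; the Fenchel-duality algorithm of the entry above generalizes this machinery to the Tsallis (q-deformed) setting.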
Speed Gradient and MaxEnt principle for Shannon and Tsallis entropies
The notion of entropy is widely used in modern statistical physics, thermodynamics, information theory, engineering, etc. In 1948, Claude Shannon introduced his information entropy for an absolutely continuous random variable x having probability density function (pdf) p. In 1988, Constantino Tsallis introduced a generalized Shannon entropy. Tsallis entropy has found applications in various sci...
On Tsallis Entropy Bias and Generalized Maximum Entropy Models
In density estimation tasks, the maximum entropy model (Maxent) can effectively use reliable prior information via certain constraints, i.e., linear constraints without empirical parameters. However, reliable prior information is often insufficient, and the selection of uncertain constraints becomes necessary but poses considerable implementation complexity. Improper setting of uncertain constraints...
Speed Gradient and MaxEnt Principles for Shannon and Tsallis Entropies
In this paper we consider the dynamics of non-stationary processes that follow the MaxEnt principle. We derive a set of equations describing the dynamics of a system for the Shannon and Tsallis entropies. Systems with discrete probability distributions are considered under mass conservation and energy conservation constraints. The existence and uniqueness of solutions are established, and asymptotic stability...
A note on inequalities for Tsallis relative operator entropy
In this short note, we present some inequalities for relative operator entropy which are generalizations of some results obtained by Zou [Operator inequalities associated with Tsallis relative operator entropy, Math. Inequal. Appl. 18 (2015), no. 2, 401–406]. Meanwhile, we also show some new lower and upper bounds for relative operator entropy and Tsallis relative o...